Steve Schmidt, Amazon chief security officer, speaks at AWS re:Inforce this week. (Amazon Photo)

This week on the GeekWire Podcast: It was a big week for cybersecurity for Seattle’s cloud giants, albeit in very different ways for each.

Microsoft President Brad Smith was in Washington, D.C., testifying before the U.S. House Homeland Security Committee about Microsoft’s security challenges — stay tuned for highlights at the end of the show.

Amazon held its annual AWS re:Inforce cloud security conference in Philadelphia. Generative AI has added some big new wrinkles to cybersecurity, and that was one of the main topics in my recent conversation with Steve Schmidt, Amazon’s chief security officer, who delivered one of the keynotes at the AWS event this week.

Listen below, and continue reading for highlights, edited for context and clarity.

Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.

How generative AI is changing the security landscape: “Generative AI very definitely does enable attackers to be more effective in some areas. For example, it helps them craft more effective phishing emails, or solicitations for people to click on links, things like that. It definitely gives the attacker a lot more capability.

“But it can also enable the defender, because when we take advantage of generative AI, it allows our security engineering staff to be more effective. It allows us to offload a lot of the undifferentiated heavy lifting that the engineers had to do before, and lets them do the thing that humans are best at: looking at the murky gray area, sifting through the little tiny pieces that don’t seem to make sense, and putting them together into a puzzle picture that all of a sudden makes them go, ‘Aha. Alright. I know what’s going on here.’

“In most cases, when we apply generative AI to the security work that we have to do, we end up with happier security engineers on the other end, because ultimately, they don’t want to do the boring, laborious stuff. They want to apply their minds. They want to think about the interesting angles to the stuff that they’re working on — the stuff that generative AI can’t do right now.”

One use case for generative AI in security: “An easy example is plain language summarization of very complex events. If you think about the security job that I’ve got here, a lot of it is taking little tiny pieces of technical data and forming them into a story about what’s going on.

“Creating that story, and then taking that information and conveying it to business owners, is something that every security professional has to do. It is arguably one of the hardest parts of our job — taking something that’s incredibly complex, technical and nuanced, and putting it in a language that makes sense to a chief financial officer, or a chief executive officer. Generative AI is actually turning out to be very useful in that space.”
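To make that summarization use case concrete, here is a minimal sketch of how a security team might ask a model hosted on Amazon Bedrock to turn raw findings into a plain-language briefing for a business audience. The model ID, the sample findings, and the prompt are illustrative assumptions for this sketch, not details Schmidt described.

```python
import json

import boto3

# Hypothetical findings, shaped like the "little tiny pieces of technical
# data" Schmidt describes. In practice these might come from GuardDuty,
# CloudTrail, or Security Hub exports.
findings = [
    {"source": "GuardDuty", "severity": "HIGH",
     "detail": "Unusual API calls from an EC2 instance to an unknown IP."},
    {"source": "CloudTrail", "severity": "MEDIUM",
     "detail": "IAM policy changed outside the normal deployment window."},
]

prompt = (
    "Summarize these security findings in plain language for a chief "
    "financial officer, in three sentences or fewer:\n"
    + json.dumps(findings, indent=2)
)

# Call a Bedrock-hosted model via the Converse API; the model choice here
# is an assumption for illustration.
client = boto3.client("bedrock-runtime", region_name="us-east-1")
response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)

# Print the plain-language summary returned by the model.
print(response["output"]["message"]["content"][0]["text"])
```

The point of the sketch is simply the translation step: structured, technical event data goes in, and an executive-friendly narrative comes out.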

Three big questions companies should ask to adopt generative AI securely:

  1. Where is our data? “Business teams are sending data to an LLM for processing, whether for training, to help build and customize the model, or through queries when they use that model. How has that data been handled throughout that workflow? How was it secured? Those are critical things to understand.”
  2. What happens with my query, and any associated data? “Training data isn’t the only sensitive data set you need to be concerned about when users start to embrace generative AI and LLMs. So if a user queries an AI application, is the output from that query, and the user’s reaction to the results, used to train the model further? What about the file the user submitted as part of the query?”
  3. Is the output from these models accurate enough? “The quality of the outputs from these models is steadily improving. And security teams can use generative AI as one of the tools to address challenges. From the security perspective, it’s really the use case that defines the relative risk.”

How Schmidt’s past experience at the FBI informs his approach: “The thing I took most from my experience at the FBI was a focus on the people behind adverse actions. For a lot of my career, for example, I was focused on Russian and Chinese counterintelligence. If you look at the motivators for espionage in the classic world, they’re exactly the same things that motivate hackers right now: money, ideology, coercion, or ego.”

What he gets from his volunteer work as an EMT and firefighter: “As people, we crave feedback. We want to see that we are successful, we want to see that what we do matters. And in the computer world, a lot of what we’re dealing with is virtual. So it’s really hard to see the result of your action. It’s also really hard to see an individual impact in an area where you’re looking at, like I am, hundreds of millions of machines.

“Being a volunteer firefighter and advanced emergency medical technician means that if I do my job well, an individual human being who I can see and touch has a better day. And I get that real human feedback that isn’t available from a computer. That’s incredibly satisfying. As a person, I know I am personally bringing value to this; I am helping that person in a situation that may have been the worst day of their life, and we’re going to make it better.”

Listen to the full conversation above, or subscribe to the GeekWire Podcast in Apple Podcasts, Spotify, or wherever you listen.

Audio editing by Curt Milton.

